
    Diagrams Based on Structured Object Perception

    Most diagrams, particularly those used in software engineering, are line drawings consisting of nodes drawn as rectangles or circles, and edges drawn as lines linking them. In the present paper we review some of the literature on human perception to develop guidelines for effective diagram drawing. Particular attention is paid to structural object recognition theory. According to this theory, as objects are perceived they are decomposed into a set of 3D primitives called geons, together with the skeleton structure connecting them. We present a set of guidelines for drawing variations on node-link diagrams using geon-like primitives, and provide some examples. Results from three experiments are reported that evaluate 3D geon diagrams in comparison with 2D UML (Unified Modeling Language) diagrams. The first experiment measures the time and accuracy with which a subject recognizes a sub-structure of a diagram represented using either geon primitives or UML primitives. The second and third experiments compare the accuracy of recalling geon vs. UML diagrams. The results of these experiments show that geon diagrams can be visually analyzed more rapidly, with fewer errors, and can be remembered better than equivalent UML diagrams.

    Investigating Text Legibility on Non-Rectangular Displays


    Exploring the use of hand-to-face input for interacting with head-worn displays

    We propose the use of Hand-to-Face input, a method to interact with head-worn displays (HWDs) that involves contact with the face. We explore Hand-to-Face interaction to find suitable techniques for common mobile tasks. We evaluate this form of interaction with document navigation tasks and examine its social acceptability. In a first study, users identify the cheek and forehead as the predominant areas for interaction and agree on gestures for tasks involving continuous input, such as document navigation. These results guide the design of several Hand-to-Face navigation techniques and reveal that gestures performed on the cheek are more efficient and less tiring than interactions directly on the HWD. Initial results on the social acceptability of Hand-to-Face input allow us to further refine our design choices, and reveal unforeseen results: some gestures are considered culturally inappropriate, and gender plays a role in the selection of specific Hand-to-Face interactions. From our overall results, we provide a set of guidelines for developing effective Hand-to-Face interaction techniques.

    Manipulating synthetic voice parameters for navigation in hierarchical structures

    Presented at the 11th International Conference on Auditory Display (ICAD2005). Auditory interfaces commonly use synthetic speech for conveying information. In many instances the information being conveyed is hierarchically structured, such as menus. In this paper, we describe the results of an experiment that was designed to investigate the use of multiple synthetic voices for representing hierarchical information. A hierarchy of 27 nodes was created (in which 2 of the nodes were not shown to the participants during the training session). A between-subjects design (N = 16) was used to evaluate the effect of multiple synthetic voices on recall rates. Two different forms of training were provided. Participants' tasks involved identifying the position of nodes in the hierarchy by listening to the synthetic voice. The results show that 84.38% of the participants recalled the position of the nodes accurately. The results also indicate that multiple synthetic voices can be used to facilitate navigation in hierarchies. Overall, this study suggests that it is possible to use synthetic voices to represent hierarchies.
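    The idea of varying voice parameters by hierarchy level can be sketched as follows. This is an illustrative sketch only: the parameter names, values, and hierarchy shape are assumptions, not the paper's implementation.

    ```python
    # Illustrative sketch: assign a distinct synthetic-voice profile to each
    # depth level of a small hierarchy, so a listener can infer a node's
    # depth from the voice alone. Pitch/rate values are hypothetical.

    def build_hierarchy(depth, branching, path=()):
        """Enumerate node paths of a complete hierarchy as tuples of child indices."""
        nodes = [path]
        if depth > 0:
            for i in range(branching):
                nodes.extend(build_hierarchy(depth - 1, branching, path + (i,)))
        return nodes

    # One hypothetical voice profile per depth level.
    VOICE_BY_LEVEL = {
        0: {"pitch_hz": 90,  "rate_wpm": 160},   # root
        1: {"pitch_hz": 120, "rate_wpm": 170},
        2: {"pitch_hz": 160, "rate_wpm": 180},
        3: {"pitch_hz": 210, "rate_wpm": 190},   # deepest nodes
    }

    def voice_for(path):
        """Voice profile used to speak the node at the given path."""
        return VOICE_BY_LEVEL[len(path)]
    ```

    For example, a depth-2 hierarchy with branching factor 3 yields 13 nodes, and the root is spoken with the lowest-pitched, slowest voice.
    
    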

    Desktop-Gluey: Augmenting Desktop Environments with Wearable Devices

    Upcoming consumer-ready head-worn displays (HWDs) can play a central role in unifying the interaction experience in distributed display environments (DDEs). We recently implemented Gluey, a HWD system that 'glues' together the input mechanisms across a display ecosystem to facilitate content migration and seamless interaction across multiple, co-located devices. Gluey can minimize device-switching costs, opening new possibilities and scenarios for multi-device interaction. In this paper, we propose Desktop-Gluey, a system to augment situated desktop environments, allowing users to extend the physical displays in their environment, organize information in spatial layouts, and 'carry' desktop content with them. We extend this metaphor beyond the desktop to provide 'anywhere and anytime' support for mobile and collaborative interactions.

    Multi-scale gestural interaction for augmented reality

    We present a multi-scale gestural interface for augmented reality applications. With virtual objects, gestural interactions such as pointing and grasping can be convenient and intuitive; however, they are imprecise, socially awkward, and susceptible to fatigue. Our prototype application uses multiple sensors to detect gestures from both arm and hand motions (macro-scale) and finger gestures (micro-scale). Micro-gestures can provide precise input through a belt-worn sensor configuration, with the hand in a relaxed posture. We present an application that combines direct manipulation with microgestures for precise interaction, beyond the capabilities of direct manipulation alone.
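    The macro/micro split described above can be sketched as a simple dispatch on motion amplitude. The thresholds, field names, and handler labels here are assumptions for illustration, not the prototype's actual logic.

    ```python
    # Hypothetical sketch of multi-scale gesture dispatch: large arm/hand
    # motions route to coarse direct manipulation (macro-scale), while
    # small finger motions route to precise adjustment (micro-scale).

    def classify_gesture(arm_displacement_cm, finger_displacement_mm):
        """Route a sensed motion to a macro- or micro-scale handler."""
        if arm_displacement_cm > 5.0:
            return "macro"   # e.g. point at or grasp a virtual object
        if finger_displacement_mm > 0.5:
            return "micro"   # e.g. fine-tune a value with a small finger motion
        return "idle"        # below both thresholds: no gesture
    ```

    A dispatcher like this lets the coarse and precise input channels coexist without the user explicitly switching modes.
    
    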

    DataLev: Mid-air Data Physicalisation Using Acoustic Levitation

    Data physicalisation is a technique that encodes data through the geometric and material properties of an artefact, allowing users to engage with data in a more immersive and multi-sensory way. However, current methods of data physicalisation are limited in terms of their reconfigurability and the types of materials that can be used. Acoustophoresis, a method of suspending and manipulating materials using sound waves, offers a promising solution to these challenges. In this paper, we present DataLev, a design space and platform for creating reconfigurable, multimodal data physicalisations with enriched materiality using acoustophoresis. We demonstrate the capabilities of DataLev through eight examples and evaluate its performance in terms of reconfigurability and materiality. Our work offers a new approach to data physicalisation, enabling designers to create more dynamic, engaging, and expressive artefacts.

    Kick: investigating the use of kick gestures for mobile interactions

    In this paper we describe the use of kick gestures for interaction with mobile devices. Kicking is a well-studied leg action that can be harnessed in mobile contexts where the hands are busy or too dirty to interact with the phone. In this paper we examine the design space of kicking as an interaction technique through two user studies. The first study investigated how well users were able to control the direction of their kicks. Users were able to aim their kicks best when the movement range was divided into segments of at least 24°. In the second study we looked at the velocity of a kick. We found that users are able to kick with at least two distinct velocities; however, they also often undershoot the target velocity. Finally, we propose some specific applications in which kicks can prove beneficial.
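    The direction finding above (reliable aiming with segments of at least 24°) can be sketched as a simple angular quantiser. The overall 96° movement range and the segment indexing are assumptions for illustration, not values reported by the paper.

    ```python
    # Illustrative sketch: quantise a kick's direction into angular
    # segments of at least 24 degrees, the granularity at which users
    # were found to aim reliably. The 96-degree range is an assumption.

    def kick_segment(angle_deg, range_deg=96.0, segment_deg=24.0):
        """Return the index of the segment a kick angle falls into, or None."""
        if not 0.0 <= angle_deg <= range_deg:
            return None                  # kick outside the tracked range
        n_segments = int(range_deg // segment_deg)
        # Clamp the boundary angle (exactly range_deg) into the last segment.
        return min(int(angle_deg // segment_deg), n_segments - 1)
    ```

    With these defaults the range yields four selectable segments, matching the granularity the study found controllable.
    
    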

    Counterpoint: exploring mixed-scale gesture interaction for AR applications

    This paper presents ongoing work on a design exploration for mixed-scale gestures, which interleave microgestures with larger gestures for computer interaction. We describe three prototype applications that show various facets of this multi-dimensional design space. These applications portray various tasks on a HoloLens augmented reality display, using different combinations of wearable sensors. Future work toward expanding the design space and exploration is discussed, along with plans toward evaluation of mixed-scale gesture design.